Convergence Analysis of Alternating Direction Method of Multipliers for a Family of Nonconvex Problems
Mingyi Hong 洪明毅 (University of Minnesota)
Abstract: The alternating direction method of multipliers (ADMM) is widely used to solve large-scale linearly constrained optimization problems, convex or nonconvex, in many engineering fields. However, there is a general lack of theoretical understanding of the algorithm when the objective function is nonconvex. In this work we analyze the convergence of the ADMM for solving certain nonconvex consensus and sharing problems. Using a three-step argument, we show that the classical ADMM converges to the set of stationary solutions, provided that the penalty parameter in the augmented Lagrangian is chosen to be sufficiently large. For the sharing problems, we show that the ADMM is convergent regardless of the number of variable blocks. Our analysis imposes no assumptions on the iterates generated by the algorithm, and it applies broadly to many ADMM variants involving proximal update rules and various flexible block selection rules. Finally, we discuss a few generalizations of the three-step analysis to a broader class of algorithms, with applications in signal processing and machine learning.
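For readers less familiar with the method, the classical two-block ADMM iteration discussed in the abstract can be sketched as follows; the notation here is generic background and is not taken from the talk itself:

```latex
% Generic two-block problem:  min_{x,z} f(x) + g(z)  subject to  Ax + Bz = c,
% with augmented Lagrangian (rho > 0 is the penalty parameter mentioned above):
\[
  L_\rho(x, z, y) \;=\; f(x) + g(z)
      + \langle y,\; Ax + Bz - c \rangle
      + \frac{\rho}{2}\,\| Ax + Bz - c \|^2 .
\]
% One ADMM iteration alternates two primal minimizations with a dual ascent step:
\begin{align*}
  x^{k+1} &= \arg\min_{x}\; L_\rho(x, z^{k}, y^{k}), \\
  z^{k+1} &= \arg\min_{z}\; L_\rho(x^{k+1}, z, y^{k}), \\
  y^{k+1} &= y^{k} + \rho\,\bigl(A x^{k+1} + B z^{k+1} - c\bigr).
\end{align*}
```

The convergence result described in the abstract concerns this scheme (and multi-block and proximal variants) when $f$ or $g$ is nonconvex, with $\rho$ taken sufficiently large.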
Mathematics
Audience: researchers in the field
| Organizers: | Shing Tung Yau, Shiu-Yuen Cheng, Sen Hu*, Mu-Tao Wang |
| *contact for this listing |
